

Benchmarking of Clustering Validity Measures Revisited

Simpson, Connor, Campello, Ricardo J. G. B., Stojanovski, Elizabeth

arXiv.org Machine Learning

Clustering is an unsupervised learning technique that aims to identify patterns that consist of similar or interrelated observations within data [39, 87]. Many existing clustering algorithms are often categorised into three primary groups [39, 82]: partitioning algorithms such as K-Means [39] and Spectral Clustering [88], hierarchical algorithms such as Single Linkage [39] and HDBSCAN* [7, 8], and soft (fuzzy or probabilistic) algorithms such as Fuzzy c-Means (FCM) [4] and Expectation Maximisation with Gaussian Mixture Models (EM-GMM) [20]. Partitioning clustering algorithms partition data into a given number of k clusters, while hierarchical clustering algorithms produce a sequence of nested partitions with incrementally varying numbers of clusters. Soft clustering algorithms are similar to partitioning techniques except that each data observation is assigned a degree of membership or probability to each cluster, rather than a full assignment to a single cluster. It is worth mentioning that within the aforementioned categories there are clustering algorithms that may not necessarily assign all observations to clusters, due to outlier trimming or noise detection. Two examples of such algorithms are trimmed K-means [14] and the previously mentioned HDBSCAN*, each of which may produce solutions where not all observations are assigned to clusters. Clustering validation or validity is an important step of the clustering process irrespective of the algorithm used [39, 25], as it is crucial to determine the best produced partition(s) and number of clusters within the data [23].
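Validity indices of the kind this survey benchmarks score candidate partitions so the best one can be chosen. As a minimal, self-contained illustration (not tied to any specific benchmark in the paper), the sketch below computes the standard Calinski-Harabasz index for a tiny 1-D dataset and shows that a partition respecting the visible gap scores far higher than one that splits a group:

```python
def calinski_harabasz(points, labels):
    """Calinski-Harabasz index for 1-D data (higher is better)."""
    n = len(points)
    clusters = {}
    for x, lab in zip(points, labels):
        clusters.setdefault(lab, []).append(x)
    k = len(clusters)
    overall = sum(points) / n
    # between-cluster dispersion: size-weighted squared centroid offsets
    between = sum(len(c) * (sum(c) / len(c) - overall) ** 2 for c in clusters.values())
    # within-cluster dispersion: squared deviations from each cluster mean
    within = sum(sum((x - sum(c) / len(c)) ** 2 for x in c) for c in clusters.values())
    return (between / (k - 1)) / (within / (n - k))

data = [0, 1, 2, 10, 11, 12]
good = [0, 0, 0, 1, 1, 1]   # matches the visible gap
bad = [0, 0, 1, 1, 1, 1]    # splits the first group
print(calinski_harabasz(data, good))  # 150.0
print(calinski_harabasz(data, bad))   # ≈ 5.74
```

The same loop over candidate partitions (e.g. over k) is how such indices are typically used to pick the number of clusters.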


Wasserstein-based Kernels for Clustering: Application to Power Distribution Graphs

Oneto, Alfredo, Gjorgiev, Blazhe, Sansavini, Giovanni

arXiv.org Artificial Intelligence

Many data clustering applications must handle objects that cannot be represented as vector data. In this context, the bag-of-vectors representation can be leveraged to describe complex objects through discrete distributions, and the Wasserstein distance can effectively measure the dissimilarity between them. Additionally, kernel methods can be used to embed data into feature spaces that are easier to analyze. Despite significant progress in data clustering, a method that simultaneously accounts for distributional and vectorial dissimilarity measures is still lacking. To tackle this gap, this work explores kernel methods and Wasserstein distance metrics to develop a computationally tractable clustering framework. The compositional properties of kernels allow the simultaneous handling of different metrics, enabling the integration of both vectors and discrete distributions for object representation. This approach is flexible enough to be applied in various domains, such as graph analysis and image processing. The framework consists of three main components. First, we efficiently approximate pairwise Wasserstein distances using multiple reference distributions. Second, we employ kernel functions based on Wasserstein distances and present ways of composing kernels to express different types of information. Finally, we use the kernels to cluster data and evaluate the quality of the results using scalable and distance-agnostic validity indices. A case study involving two datasets of 879 and 34,920 power distribution graphs demonstrates the framework's effectiveness and efficiency.
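The paper approximates pairwise Wasserstein distances with multiple reference distributions; the minimal sketch below skips that approximation and shows only the underlying idea in 1-D, where the W1 distance between equal-size samples reduces to the mean gap between order statistics, plugged into a distance-substitution kernel (which, as in the general Wasserstein case, is not guaranteed positive semi-definite):

```python
import math

def wasserstein_1d(xs, ys):
    """1-D W1 distance between equal-size samples: mean gap between order statistics."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

def wasserstein_kernel(xs, ys, gamma=1.0):
    # Distance-substitution kernel exp(-gamma * W1); indefinite in general.
    return math.exp(-gamma * wasserstein_1d(xs, ys))

a = [0.0, 1.0, 2.0]
b = [0.5, 1.5, 2.5]     # a shifted by 0.5
c = [10.0, 11.0, 12.0]  # far away
print(wasserstein_1d(a, b))      # 0.5
print(wasserstein_kernel(a, b))  # exp(-0.5) ≈ 0.607
print(wasserstein_kernel(a, c))  # exp(-10) ≈ 4.5e-05
```

Similar distributions map to kernel values near 1, dissimilar ones toward 0, which is what lets kernel clustering operate on bag-of-vectors objects.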


Improving internal cluster quality evaluation in noisy Gaussian mixtures

de Amorim, Renato Cordeiro, Makarenkov, Vladimir

arXiv.org Machine Learning

Clustering is a well-established technique in machine learning and data analysis, widely used across various domains. Cluster validity indices, such as the Average Silhouette Width, Calinski-Harabasz, and Davies-Bouldin indices, play a crucial role in assessing clustering quality when external ground truth labels are unavailable. However, these measures can be affected by the feature relevance issue, potentially leading to unreliable evaluations in high-dimensional or noisy data sets. We introduce a theoretically grounded Feature Importance Rescaling (FIR) method that enhances the quality of clustering validation by adjusting feature contributions based on their dispersion. It attenuates noise features, clarifies clustering compactness and separation, and thereby aligns clustering validation more closely with the ground truth. Through extensive experiments on synthetic data sets under different configurations, we demonstrate that FIR consistently improves the correlation between the values of cluster validity indices and the ground truth, particularly in settings with noisy or irrelevant features. The results show that FIR increases the robustness of clustering evaluation, reduces variability in performance across different data sets, and remains effective even when clusters exhibit significant overlap. These findings highlight the potential of FIR as a valuable enhancement of clustering validation, making it a practical tool for unsupervised learning tasks where labelled data is unavailable. (Mila - Quebec AI Institute, Montreal, QC, Canada. Keywords: cluster validity indices, data rescaling, noisy data.)
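FIR's exact formulation is given in the paper; the sketch below only illustrates the general idea under its own simplifying assumptions: weight each feature inversely to its pooled within-cluster variance, so a noise feature contributes little to a between/within separation score. The `separation` score used here is a generic illustrative ratio, not one of the paper's indices:

```python
def within_var(col, labels):
    """Pooled within-cluster variance of a single feature."""
    groups = {}
    for v, lab in zip(col, labels):
        groups.setdefault(lab, []).append(v)
    total = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups.values())
    return total / len(col)

def separation(data, labels, weights):
    """Weighted between-centroid distance over mean weighted within-cluster spread."""
    groups = {}
    for row, lab in zip(data, labels):
        groups.setdefault(lab, []).append(row)
    cents = {lab: [sum(c) / len(c) for c in zip(*rows)] for lab, rows in groups.items()}
    a, b = cents.values()  # assumes exactly two clusters for this toy example
    between = sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)) ** 0.5
    within = sum(
        sum(w * (x - c) ** 2 for w, x, c in zip(weights, row, cents[lab])) ** 0.5
        for row, lab in zip(data, labels)
    ) / len(data)
    return between / within

# feature 0 separates the clusters; feature 1 is pure noise
data = [(0.0, 0.0), (0.1, 5.0), (0.2, 2.5), (5.0, 0.3), (5.1, 4.8), (5.2, 2.4)]
labels = [0, 0, 0, 1, 1, 1]
cols = list(zip(*data))
weights = [1.0 / within_var(c, labels) for c in cols]  # noise feature gets a tiny weight
plain = separation(data, labels, [1.0, 1.0])
scaled = separation(data, labels, weights)
print(weights, plain, scaled)  # rescaling sharpens the separation score
```

The informative feature ends up weighted orders of magnitude above the noise feature, and the rescaled score exceeds the unweighted one, which is the effect FIR exploits to make validity indices more reliable.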


CavDetect: A DBSCAN Algorithm based Novel Cavity Detection Model on Protein Structure

Adhikari, Swati, Roy, Parthajit

arXiv.org Artificial Intelligence

Cavities on protein structures are formed through interactions between proteins and small molecules known as ligands; they are the locations where ligands bind to proteins. Accurate detection of these locations is essential to the success of the drug design process. This study proposes a novel Voronoi Tessellation based cavity detection model for detecting cavities on protein structures. Because the atom space of a protein structure is dense and large in volume, and because the DBSCAN (Density Based Spatial Clustering of Applications with Noise) algorithm handles such data well without requiring the number of clusters (cavities) to be known a priori, this study implements the proposed model using the DBSCAN algorithm.
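The paper's CavDetect model combines Voronoi tessellation with DBSCAN; the sketch below shows only the generic DBSCAN half (no Voronoi step, toy 2-D points instead of atom coordinates) so the density-based grouping and noise handling it relies on are concrete:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN; label -1 marks noise. Neighbourhoods include the point itself."""
    def neighbours(i):
        return [j for j in range(len(points))
                if sum((a - b) ** 2 for a, b in zip(points[i], points[j])) <= eps ** 2]

    labels = [None] * len(points)
    cluster_id = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:          # not a core point: provisionally noise
            labels[i] = -1
            continue
        cluster_id += 1                  # start a new cluster from this core point
        labels[i] = cluster_id
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster_id   # noise reachable from a core point: border point
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            j_nbrs = neighbours(j)
            if len(j_nbrs) >= min_pts:   # j is core too: keep expanding the cluster
                queue.extend(j_nbrs)
    return labels

pts = [(0, 0), (0.5, 0), (0, 0.5), (5, 5), (5.5, 5), (5, 5.5), (20, 20)]
print(dbscan(pts, eps=1.0, min_pts=3))  # → [0, 0, 0, 1, 1, 1, -1]
```

The two dense pockets become clusters 0 and 1 while the isolated point is flagged as noise, with no cluster count supplied up front.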


From A-to-Z Review of Clustering Validation Indices

Hassan, Bryar A., Tayfor, Noor Bahjat, Hassan, Alla A., Ahmed, Aram M., Rashid, Tarik A., Abdalla, Naz N.

arXiv.org Artificial Intelligence

Data clustering involves identifying latent similarities within a dataset and organizing them into clusters or groups. The outcomes of various clustering algorithms differ as they are susceptible to the intrinsic characteristics of the original dataset, including noise and dimensionality. The effectiveness of such clustering procedures directly impacts the homogeneity of clusters, underscoring the significance of evaluating algorithmic outcomes. Consequently, the assessment of clustering quality is a significant and complex endeavor. A pivotal aspect affecting clustering validation is the cluster validity metric, which aids in determining the optimal number of clusters. The main goal of this study is to comprehensively review and explain the mathematical operation of internal and external cluster validity indices (though not exhaustively), to categorize these indices, and to offer suggestions for the future advancement of clustering validation research. In addition, we review and evaluate the performance of internal and external clustering validation indices on the most common clustering algorithms, such as the evolutionary clustering algorithm star (ECA*). Finally, we suggest a classification framework for examining the functionality of both internal and external clustering validation measures regarding their ideal values, user-friendliness, responsiveness to input data, and appropriateness across various fields. This classification aids researchers in selecting the appropriate clustering validation measure to suit their specific requirements.
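The internal/external distinction this review is organized around is easy to make concrete: external indices compare a clustering against reference labels, internal ones use only the data. As a minimal example (not from the paper), the Rand index is the fraction of point pairs on which two labelings agree about "same cluster or not":

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """External validation: fraction of point pairs on which two labelings agree."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agree = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agree / len(pairs)

truth = [0, 0, 0, 1, 1, 1]
perfect = [1, 1, 1, 0, 0, 0]     # same grouping, different label names
off_by_one = [0, 0, 1, 1, 1, 1]  # one point moved across the boundary
print(rand_index(truth, perfect))     # 1.0 (label names are irrelevant)
print(rand_index(truth, off_by_one))  # 10/15 ≈ 0.667
```

An internal index, by contrast, would score `truth` and `off_by_one` from the raw coordinates alone, which is exactly the setting where the review's categorization of index behaviour matters.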


A new validity measure for fuzzy c-means clustering

Kim, Dae-Won, Lee, Kwang H.

arXiv.org Artificial Intelligence

A new cluster validity index is proposed for fuzzy clusters obtained from the fuzzy c-means algorithm. The proposed validity index exploits inter-cluster proximity between fuzzy clusters. Inter-cluster proximity is used to measure the degree of overlap between clusters. A low proximity value refers to well-partitioned clusters. The best fuzzy c-partition is obtained by minimizing inter-cluster proximity with respect to c. Well-known data sets are tested to show the effectiveness and reliability of the proposed index.
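The exact proximity definition is given in the paper; a simple stand-in that captures the same intuition is the mean pointwise minimum of two clusters' membership columns, which is near zero for crisp, well-separated clusters and large when memberships overlap:

```python
def proximity(u, p, q):
    """Overlap between fuzzy clusters p and q: mean pointwise minimum membership.
    u is a membership matrix with one row per point, one column per cluster."""
    return sum(min(row[p], row[q]) for row in u) / len(u)

crisp = [[0.95, 0.05], [0.9, 0.1], [0.1, 0.9], [0.05, 0.95]]  # well partitioned
fuzzy = [[0.55, 0.45], [0.6, 0.4], [0.45, 0.55], [0.5, 0.5]]  # heavy overlap
print(proximity(crisp, 0, 1))  # 0.075 -> low proximity, good partition
print(proximity(fuzzy, 0, 1))  # 0.45  -> high proximity, poor partition
```

Minimizing such an overlap measure over the number of clusters c is the selection principle the abstract describes.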


A Bayesian cluster validity index

Wiroonsri, Nathakhun, Preedasawakul, Onthada

arXiv.org Artificial Intelligence

Selecting the number of clusters is one of the key steps when applying clustering algorithms. To fulfill this task, various cluster validity indices (CVIs) have been introduced. Most CVIs are defined to detect the optimal number of clusters hidden in a dataset. However, users sometimes seek not the optimal number of groups but a secondary one that is more reasonable for their application. This has motivated us to introduce a Bayesian cluster validity index (BCVI) based on existing underlying indices. The index is defined using either Dirichlet or Generalized Dirichlet priors, which result in the same posterior distribution. Our BCVI is then tested with the Wiroonsri index (WI) and the Wiroonsri-Preedasawakul index (WP) as underlying indices for hard and soft clustering, respectively. We compare their outcomes with the original underlying indices, as well as several other existing CVIs, including the Davies-Bouldin (DB), Starczewski (STR), Xie-Beni (XB), and KWON2 indices. The proposed BCVI is particularly beneficial when prior experience matters, as users can specify their expected range for the final number of clusters. We emphasize this aspect with experiments classified into three different cases. Finally, we present applications to real-world datasets, including MRI brain tumor images. Our tools will be added to a new version of the recently developed R package "UniversalCVI".
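The BCVI's Dirichlet-based construction is defined in the paper; the toy sketch below captures only the qualitative mechanism under loose assumptions of its own: treat index values over k as likelihood-like weights, multiply by a user-specified prior over k, and normalize. The score values and priors here are entirely hypothetical:

```python
def posterior_over_k(index_scores, prior):
    """Toy sketch: weight index values (higher = better) by a prior over k, normalize."""
    raw = {k: index_scores[k] * prior.get(k, 0.0) for k in index_scores}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}

# hypothetical index values for k = 2..5
scores = {2: 0.9, 3: 0.85, 4: 0.4, 5: 0.2}
flat_prior = {k: 0.25 for k in scores}
user_prior = {2: 0.05, 3: 0.6, 4: 0.3, 5: 0.05}  # user expects 3-4 clusters

flat = posterior_over_k(scores, flat_prior)
informed = posterior_over_k(scores, user_prior)
print(max(flat, key=flat.get))      # 2: the raw index's favourite
print(max(informed, key=informed.get))  # 3: close runner-up promoted by the prior
```

With a flat prior the raw index winner (k=2) survives, but encoding the user's expected range shifts the choice to the nearly-as-good k=3, which is the kind of secondary-option behaviour the BCVI formalizes.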


A correlation-based fuzzy cluster validity index with secondary options detector

Wiroonsri, Nathakhun, Preedasawakul, Onthada

arXiv.org Machine Learning

The optimal number of clusters is one of the main concerns when applying cluster analysis. Several cluster validity indexes have been introduced to address this problem. However, in some situations, there is more than one option that can be chosen as the final number of clusters. This aspect has been overlooked by most of the existing works in this area. In this study, we introduce a correlation-based fuzzy cluster validity index known as the Wiroonsri-Preedasawakul (WP) index. This index is defined based on the correlation between the actual distance between a pair of data points and the distance between adjusted centroids with respect to that pair. We evaluate and compare the performance of our index with several existing indexes, including Xie-Beni, Pakhira-Bandyopadhyay-Maulik, Tang, Wu-Li, generalized C, and Kwon2. We conduct this evaluation on four types of datasets: artificial datasets, real-world datasets, simulated datasets with ranks, and image datasets, using the fuzzy c-means algorithm. Overall, the WP index outperforms most, if not all, of these indexes in terms of accurately detecting the optimal number of clusters and providing accurate secondary options. Moreover, our index remains effective even when the fuzziness parameter $m$ is set to a large value. Our R package called UniversalCVI used in this work is available at https://CRAN.R-project.org/package=UniversalCVI.
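The WP index uses adjusted centroids and a fuzzy membership matrix; the hard-clustering sketch below keeps only the core correlation idea, comparing each pair's actual distance with the distance between the centroids the two points are assigned to (a simplification of the paper's construction, not its definition):

```python
from itertools import combinations

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

def centroid_correlation(points, labels):
    """Correlation between point-pair distances and assigned-centroid distances (1-D)."""
    groups = {}
    for x, lab in zip(points, labels):
        groups.setdefault(lab, []).append(x)
    cent = {lab: sum(g) / len(g) for lab, g in groups.items()}
    d_points, d_cents = [], []
    for i, j in combinations(range(len(points)), 2):
        d_points.append(abs(points[i] - points[j]))
        d_cents.append(abs(cent[labels[i]] - cent[labels[j]]))
    return pearson(d_points, d_cents)

data = [0.0, 0.2, 0.4, 10.0, 10.2, 10.4]
good = [0, 0, 0, 1, 1, 1]
bad = [0, 1, 0, 1, 0, 1]
print(centroid_correlation(data, good))  # near 1: structure well captured
print(centroid_correlation(data, bad))   # low: centroids explain nothing
```

A partition that reflects the data's structure makes the two distance lists rise and fall together, so the correlation is a natural quality score.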


A new nonparametric interpoint distance-based measure for assessment of clustering

Modak, Soumita

arXiv.org Artificial Intelligence

A new interpoint distance-based measure is proposed to identify the optimal number of clusters present in a data set. Designed with a nonparametric approach, it is independent of the distribution of the given data. Interpoint distances between the data members make our cluster validity index applicable to univariate and multivariate data measured on arbitrary scales, including observations in any dimensional space where the number of study variables can even exceed the sample size. Our proposed criterion is compatible with any clustering algorithm, and can be used to determine the unknown number of clusters or to assess the quality of the resulting clusters for a data set. Demonstration through synthetic and real-life data establishes its superiority over the well-known clustering accuracy measures of the literature.
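The author's exact criterion is defined in the paper; a generic interpoint-distance score in the same spirit is the ratio of mean between-cluster to mean within-cluster interpoint distance, maximized over candidate numbers of clusters. The sketch below is purely illustrative and algorithm-agnostic, as such criteria are:

```python
from itertools import combinations

def distance_ratio(points, labels):
    """Mean between-cluster over mean within-cluster interpoint distance (1-D)."""
    within, between = [], []
    for i, j in combinations(range(len(points)), 2):
        d = abs(points[i] - points[j])
        (within if labels[i] == labels[j] else between).append(d)
    return (sum(between) / len(between)) / (sum(within) / len(within))

data = [0.0, 0.2, 0.4, 10.0, 10.2, 10.4]
candidates = {
    2: [0, 0, 0, 1, 1, 1],  # respects the gap
    3: [0, 0, 1, 2, 2, 2],  # needlessly splits the first group
}
ratios = {k: distance_ratio(data, labs) for k, labs in candidates.items()}
print(ratios)                       # k=2 partition scores higher
print(max(ratios, key=ratios.get))  # 2
```

Because the score uses only pairwise distances, it applies unchanged to data of any dimension, including the p > n setting the abstract highlights.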


Are Cluster Validity Measures (In)valid?

Gagolewski, Marek, Bartoszuk, Maciej, Cena, Anna

arXiv.org Artificial Intelligence

Internal cluster validity measures (such as the Calinski-Harabasz, Dunn, or Davies-Bouldin indices) are frequently used for selecting the appropriate number of partitions a dataset should be split into. In this paper we consider what happens if we treat such indices as objective functions in unsupervised learning activities. Is the optimal grouping with regards to, say, the Silhouette index really meaningful? It turns out that many cluster (in)validity indices promote clusterings that match expert knowledge quite poorly. We also introduce a new, well-performing variant of the Dunn index that is built upon OWA operators and the near-neighbour graph so that subspaces of higher density, regardless of their shapes, can be separated from each other better.
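The paper's new variant builds on OWA operators and a near-neighbour graph; for reference, the classic Dunn index it modifies is simply the smallest between-cluster gap divided by the largest cluster diameter, as this minimal 1-D sketch shows:

```python
from itertools import combinations

def dunn(points, labels):
    """Classic Dunn index: min between-cluster distance over max cluster diameter."""
    separations, diameters = [], []
    for i, j in combinations(range(len(points)), 2):
        d = abs(points[i] - points[j])
        (diameters if labels[i] == labels[j] else separations).append(d)
    return min(separations) / max(diameters)

data = [0.0, 0.2, 0.4, 10.0, 10.2, 10.4]
good = [0, 0, 0, 1, 1, 1]
bad = [0, 0, 1, 1, 1, 1]
print(dunn(data, good))  # 9.6 / 0.4 = 24.0
print(dunn(data, bad))   # 0.2 / 10.0 = 0.02
```

Its reliance on a single minimum and a single maximum is exactly what makes it brittle for non-spherical, density-varying clusters, and replacing those extremes with OWA-aggregated near-neighbour quantities is the paper's remedy.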